Reviews: An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums
Motivated by the APCG method for empirical risk minimization, this paper proposes an accelerated decentralized stochastic algorithm for finite sums. An augmented communication graph is introduced so that the original constrained optimization problem can be transformed into its dual formulation, which is then solved by stochastic coordinate descent. The proposed method combines randomized pairwise communication with stochastic computation. An adaptive scheme for sampling edges is introduced, analogous to importance sampling in the literature. The theoretical analysis shows that the proposed algorithm achieves an optimal linear convergence rate for finite sums, with better time complexity than existing results.
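The review does not spell out the adaptive edge-sampling scheme, but the importance-sampling idea it alludes to can be sketched generically: each edge of the (augmented) communication graph is drawn with probability proportional to some per-edge weight, and the update is rescaled by the inverse probability to stay unbiased. The edge set and weights below are illustrative placeholders, not the paper's actual quantities.

```python
import random

# Hypothetical edge set of an augmented communication graph and
# per-edge weights (e.g., local smoothness constants). The actual
# weights used by the paper are not given in the review.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
weights = [3.0, 1.0, 1.0]

def sample_edge(rng):
    """Importance sampling: draw edge e with probability p_e
    proportional to its weight, and return 1/p_e so the caller
    can rescale the corresponding update for unbiasedness."""
    total = sum(weights)
    probs = [w / total for w in weights]
    edge = rng.choices(edges, weights=probs, k=1)[0]
    p = probs[edges.index(edge)]
    return edge, 1.0 / p
```

In this sketch, heavily weighted edges are activated more often, and the `1/p_e` factor keeps the expected update equal to the uniform-communication one, which is the usual mechanism behind importance-sampling-style coordinate selection.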
Reviews: An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums
Optimization under communication constraints is an important area of machine learning, and there is still much to be gained from rigorous research in this area. The statistical literature does not concern itself much with inter- and intra-processor communication; nor does the optimization (operations research) literature. Machine learning is a natural community for this research to be advanced. This paper uses a randomization scheme for communicating information between nodes to achieve impressive learning rates. The authors derive theoretical bounds on the estimation error induced by the communication constraint and demonstrate improvements over state-of-the-art approaches in empirical experiments.